Round Overview
This is a straightforward implementation problem. For each character index i in the string, check whether its left neighbor (i-1) is different or its right neighbor (i+1) is different. If at least one of them is, the coin at index i should be counted. Note that the first character has no left neighbor and the last character has no right neighbor, so we skip those specific checks.
int countCoins(string state) {
int n = state.size();
int c = 0;
for (int i = 0; i < n; i++) {
if (((i > 0) && (state[i - 1] != state[i])) || ((i + 1 < n) && (state[i + 1] != state[i]))) {
c++;
}
}
return c;
}
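For a quick sanity check, here are a few hand-traced calls of the function above (the inputs are made up for illustration; the expected values simply follow the rule described earlier):

#include <iostream>
#include <string>
using namespace std;

int countCoins(string state); // the function shown above

int main() {
    cout << countCoins("HT") << endl;  // 2: each coin differs from its only neighbor
    cout << countCoins("HHT") << endl; // 2: the middle and last coins each have a differing neighbor
    cout << countCoins("HH") << endl;  // 0: no coin has a differing neighbor
    return 0;
}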
ExplodingRobots
The same instructions are sent to both robots. Our objective is to determine whether the two robots can meet at the same coordinates by picking which instructions each of the two robots skips. So what happens with each instruction? Take a single left (L) instruction; we have 4 options:
Both robots execute the L instruction. Note how this doesn't change the relative distance between the robots.
Neither robot executes the instruction. This has the same effect as the previous case: the relative distance doesn't change.
Robot 1 executes the instruction and robot 2 doesn't. This moves robot 1 one unit horizontally while robot 2 stays put, so the horizontal distance between them changes by 1.
Robot 2 executes the instruction and robot 1 doesn't. Symmetrically, only robot 2 moves, and the horizontal distance again changes by 1.
In conclusion we really have 3 options:
Keep the horizontal distance exactly the same: This is the option to choose when the horizontal distance is already 0.
Increase the horizontal distance: One robot moves horizontally and the other doesn’t, making them further apart. This is always a bad idea, because we want them to meet.
Decrease the horizontal distance: This should be what we choose when the distance is not 0.
Note how a right (R) instruction works in exactly the same way: it helps us change the horizontal distance. Up (U) and down (D) instructions work in a similar way, but for the vertical distance.
Finally, if the initial horizontal distance is d_x, we just need to count the total number of L/R instructions. If this count is at least d_x, we have enough opportunities to decrease the horizontal distance to 0. The same is done for the initial vertical distance d_y: check whether the number of U/D instructions is at least d_y.
string canExplode(int x1, int y1, int x2, int y2, string instructions) {
int mx = 0, my = 0;
for (char d: instructions) {
if (d == 'L' || d == 'R') {
mx++;
} else {
my++;
}
}
return (abs(x2 - x1) <= mx && abs(y2 - y1) <= my) ? "Explosion" : "Safe";
}
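Again for intuition, two hand-traced calls of the function above (the inputs are made up for illustration):

#include <iostream>
#include <string>
using namespace std;

string canExplode(int x1, int y1, int x2, int y2, string instructions); // shown above

int main() {
    // Horizontal gap 2 covered by two L/R instructions, vertical gap 1 covered by one U/D instruction.
    cout << canExplode(0, 0, 2, 1, "LLU") << endl; // prints "Explosion"
    // Horizontal gap 3 but only two L/R instructions: the robots can never meet.
    cout << canExplode(0, 0, 3, 0, "LL") << endl;  // prints "Safe"
    return 0;
}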
In problems with conditions involving the xor operator, it tends to be useful to treat each bit position independently. In this case, the table of xor results s can be separated into many independent rules, one for each bit position (since all the integers have at most 30 bits, let's assume 30 bit positions). For example, for indices i and j and bit position k: if the k-th bit of s[i*n + j] is 0, then the xor between the k-th bits of elements i and j must be 0. Each of the n numbers contributes its own bit at position k, so for each position k we can find which combinations of 0s and 1s across the n numbers are consistent with s. There are at most 2^n such combinations, and with n <= 10 we can try them all.
Representing sets using bit masks is a trick commonly used during contests. If you are unfamiliar with it, give bmerry's essential tutorial, A bit of fun: fun with bits, a try.
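As a minimal, illustrative refresher (not part of any solution in this editorial), these are the usual bitmask-as-set idioms; __builtin_popcount is a GCC/Clang builtin:

#include <cstdio>

int main() {
    int mask = 0;                 // the empty set
    mask |= (1 << 3);             // insert element 3
    mask |= (1 << 0);             // insert element 0
    if ((mask >> 3) & 1) {        // test whether element 3 is in the set
        printf("3 is in the set\n");
    }
    mask &= ~(1 << 3);            // erase element 3
    int n = 4;                    // enumerate all 2^n subsets of {0, ..., n-1}
    for (int s = 0; s < (1 << n); s++) {
        printf("subset %d has %d elements\n", s, __builtin_popcount(s));
    }
    return 0;
}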
There is an extra constraint that prevents us from simply multiplying the number of combinations for each bit position: each of the numbers in the list must be between 0 and m, inclusive. This is an issue because the bit positions are no longer independent of each other. Imagine the most significant bit positions have already been decided. Depending on that decision, and on whether it already makes some numbers strictly smaller than the corresponding prefix of m, the options for the less significant positions change. Here is an example with a single number: suppose m = 1101_2 and we have already decided that the first two bits of the number are 1 and 1 (equal to m's). Then the third bit cannot be 1, because the number would become 111x, which is larger than 1101_2.
When filling the bits at position k, we need to know which of the n numbers are already smaller than m. This can be described by a bit mask eq_mask: the i-th bit of eq_mask is 1 if, so far, all the bits of number i above position k are equal to the respective bits of m. If they are not all equal, then we can freely choose 0 or 1 for the k-th bit of number i; otherwise we must choose a bit smaller than or equal to the respective bit of m.
This translates into a dynamic programming solution with recurrence f(eq_mask, i): we are currently filling the bits at position i, and eq_mask tells us which of the n numbers are still equal to m in the previously-assigned bit positions. We once again try all 2^n combinations of bits for position i, but this time we must be careful not to use a bit larger than m's whenever a number is still equal to m, and the combination must also be valid for s. When everything is valid, we compute a new eq_mask', because some numbers may no longer be equal to m, and continue with the next position: there are f(eq_mask', i+1) ways to fill the remaining positions for this specific combination of bits. Repeat this for each valid combination of bits and add up the counts.
The complexity: there are O(t * 2^n) possible states for f(), where t = 30 is the number of bit positions, and for each state we try O(2^n) combinations. Evaluating a combination requires O(n) checks to make sure the bits don't make any number greater than m; checking whether it is valid for s is O(1) after precalculating it for all (mask, i) pairs in an earlier step. In total the complexity is O(t * n * 2^(2n)), which will be fine for n <= 10. If we make sure not to do any unnecessary operations for invalid combinations, the approach is much faster in practice: there just aren't that many valid combinations.
const int MOD = 1000000007;
int countLists(vector < int > s, int m) {
// translate m to binary:
vector < int > bit_m;
for (int i = 0; i < 30; i++) {
bit_m.push_back(m % 2);
m /= 2;
}
reverse(bit_m.begin(), bit_m.end());
int n = 0;
while ((n + 1) * (n + 1) <= s.size()) {
n++;
}
int t = bit_m.size();
// precalculate which bit combinations are valid for each bit position:
bool valid_mask[1 << n][t];
for (int mask = 0; mask < (1 << n); mask++) {
for (int i = 0; i < t; i++) {
valid_mask[mask][i] = true;
for (int a = 0; a < n; a++) {
for (int b = 0; b < n; b++) {
int x = (mask >> a) & 1;
int y = (mask >> b) & 1;
int z = (s[a * n + b] >> (t - i - 1)) & 1;
if ((x ^ y) != z) {
valid_mask[mask][i] = false;
}
}
}
}
}
// the dynamic programming
int dp[1 << n][t + 1];
// the base case, all bits have been assigned, it doesn't matter if
// the number is less or equal to m, all those cases are valid
for (int eq_mask = 0; eq_mask < (1 << n); eq_mask++) {
dp[eq_mask][t] = 1;
}
for (int i = t - 1; i >= 0; i--) {
for (int eq_mask = 0; eq_mask < (1 << n); eq_mask++) {
// Calculate each f(eq_mask, i)
dp[eq_mask][i] = 0;
// try all combinations:
for (int mask = 0; mask < (1 << n); mask++) {
int neq_mask = 0;
bool valid = valid_mask[mask][i];
for (int j = 0;
(j < n) && valid; j++) {
// verify if this bit won't make the number larger than m
int b = (mask >> j) & 1;
int e = (eq_mask >> j) & 1;
int ne = e;
if (e == 1) {
if (b > bit_m[i]) {
valid = false;
} else if (b < bit_m[i]) {
// no longer equal
ne = 0;
}
}
neq_mask = (neq_mask | (ne << j));
}
if (valid) {
dp[eq_mask][i] = (dp[eq_mask][i] + dp[neq_mask][i + 1]) % MOD;
}
}
}
}
return dp[(1 << n) - 1][0];
}
Let's try answering [Given x, can we build at least x ships?]. If we could answer this question easily, then we could use a binary search to find the maximum number of ships. Knowing in advance that we must build x ships might help us find an algorithm.
Each workshop covers an interval of part numbers it can build. Let's focus on the earliest part number, part 1. There will be some workshops that can build this part; if there are none (and x is not 0), then we cannot build x ships to begin with.
The issue comes when there are multiple workshops that could build part 1. What to do? We need to build this part x times. Each workshop may have a different limit on the number of parts it can build and a different size for its interval. The trick is to always pick the workshop whose interval ends the earliest: the workshop with the smallest final part number is the one to pick. Build as many parts as possible there, at most x and at most the workshop's limit. If we still need more copies of part 1, pick the interval that ends second earliest, and so on. If even after using all the intervals that cover part 1 we cannot satisfy the requirement of x copies, the value of x is impossible. If we do build them all, remember to update the remaining capacity of the workshop we used partially, and move on to building part 2…
In general, for each part i, find all the workshops such that:
Their interval covers i
They can still build parts.
Of those, pick the workshop j with b[j] as small as possible. Repeat this until all the parts are made or you run out of workshops while building some part.
There are two issues with this approach. The first is proving that it is correct. When picking the interval with the smallest b[j], we lose the smallest possible number of options for future parts: picking any other workshop won't make the process any easier, because the interval with the smallest b[j] stops being useful at an earlier part, so spending it first only preserves more options for later.
The second issue is whether it runs under 2 seconds. Note that we do all of this inside a binary search. There are always at most n intervals and we can just search them in O(n) to find the ones we care about, and there are at most 1000000 parts, so it should run in time, albeit slowly. We can also use a data structure: a priority queue (or something similar) can receive intervals as we reach their starting part and return the current interval with the smallest b[j] in O(log(n)) time.
bool can_do(int m, vector < int > k, vector < int > a, vector < int > b, int x) {
int n = k.size();
// in this case, we use a std::set, because sets can work also as
// a priority queue, the_set.begin() will return the smallest element
set < pair < int, int > > intervals_b;
set < pair < int, int > > intervals_a;
// a set of pairs: pairs (i,j) are automatically ordered first by
// i, with j as a tie breaker. So a set of pairs {a[i], i} sorts the
// workshops in increasing order of a[i]
for (int i = 0; i < n; i++) {
intervals_a.insert({
a[i],
i
});
}
for (int i = 1; i <= m; i++) {
// add any workshops with a[j] = i:
while (!intervals_a.empty() && intervals_a.begin()->first == i) {
auto it = intervals_a.begin();
int j = it->second;
intervals_a.erase(it);
intervals_b.insert({
b[j],
j
});
}
int y = x;
while (y > 0) {
if (intervals_b.empty()) {
return false; // no workshops available for this part
}
auto it = intervals_b.begin(); // pick the workshop with smallest b[j]
int j = it->second;
// b[j] too small, its interval already ended
if (b[j] < i) {
k[j] = 0;
}
// how many parts to produce in this workshop:
int used = std::min(y, k[j]);
k[j] -= used;
y -= used;
if (k[j] == 0) {
// time to erase it.
intervals_b.erase(it);
}
}
}
return true;
}
int maxShips(int m, vector < int > k, vector < int > a, vector < int > b) {
// Binary search:
int lo = 0; // a value so low , you know it's possible
int hi = 1000000000; // a value so high, you know it's impossible
while (lo + 1 < hi) {
int x = hi - (hi - lo) / 2; // the middle
if (can_do(m, k, a, b, x)) {
lo = x;
} else {
hi = x;
}
}
return lo;
}
LimitedMemorySeries2
Consider the straightforward solution: for each element i in the array, start with diameter d = 0 and repeatedly grow it: the diameter can become d+1 as long as X[i-(d+1)] and X[i+(d+1)] both exist and are both smaller than X[i]. There are apparently two reasons we cannot just use this approach. One is that it might be O(n^2) in time complexity, which would be too much for the constraints. The other is that it seems to require storing the whole array in memory, since it needs to go backwards and not just forwards.
The generator works so that we can iterate through X starting with i = 0, always computing X[i] from the previous value X[i-1]. What is interesting about this generator is that it uses xor instead of the modular product that tends to be used for this. We have:
X[i] = ((X[i-1] ⊕ a) + b) & (m - 1)
Where ⊕ is bitwise XOR, & is bitwise AND and m = 2^50.
Taking x & (m - 1) really means discarding all bits at positions 50 and above: because m = 2^50, the binary representation of m - 1 is fifty 1-bits.
In the same way, (x + y) & (m - 1) means adding two integers together and then discarding all bit positions above 49. It is the same as adding two 50-bit (non-negative) integers while ignoring any overflow.
Given X[i], can we find X[i-1]? Let's first find v = X[i-1] ⊕ a. If v is known, then X[i-1] = v ⊕ a, because applying xor with the same value twice reverses its effect:
X[i] = (v + b) & (m-1)
So we just need to reverse a 50-bit addition that may have overflowed. In reality this is just a subtraction modulo m: v = (X[i] + m - b) & (m-1). Adding m keeps the intermediate value non-negative (conceptually appending an extra bit at position 50), we do the subtraction, and the mask cuts the unnecessary bits again.
Therefore, X[i-1] = (((X[i] + m - b) & (m-1)) ⊕ a) & (m-1). We can simplify it further: only one of the & (m-1) operations is needed.
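As a quick sanity check (not part of the original solution; the constants below are arbitrary 50-bit values chosen only for illustration), this sketch verifies that the derived inverse undoes one generator step:

#include <cassert>
#include <cstdio>

int main() {
    const long long m = 1LL << 50;
    long long a = 123456789012345LL;               // arbitrary constant below 2^50
    long long b = 987654321098765LL;               // arbitrary constant below 2^50
    long long x = 555555555555555LL;               // plays the role of X[i-1]
    long long nxt = ((x ^ a) + b) & (m - 1);       // generator step: X[i]
    long long prv = ((nxt + m - b) ^ a) & (m - 1); // derived inverse step
    assert(prv == x);                              // round trip recovers X[i-1]
    printf("%lld -> %lld -> %lld\n", x, nxt, prv);
    return 0;
}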
It turns out that we can run the straightforward algorithm without storing the array in memory. For each X[i], we keep two pointers, one at X[i-d] and one at X[i+d]. We use the generator step to find X[i+d+1] from X[i+d], and the inverse step derived above to find X[i-d-1] from X[i-d].
How is this helpful? Isn't this algorithm too slow anyway? Let's try a more accurate complexity analysis. Consider the maximum element X[t], so that X[t] >= X[i] for every i. There are t elements to the left of X[t] and n - t - 1 to its right, so the number of steps needed to find the diameter of X[t] is at most min(t, n - t - 1). What's useful here is that, since X[t] is the maximum element, the diameter of any other element never extends past position t. It is as if we divided the problem into two smaller versions of the same problem, one of length t and one of length n - t - 1. If S(n) is the total number of steps, this gives the recurrence S(n) <= S(t) + S(n - t - 1) + min(t, n - t - 1). The min(t, n - t - 1) term charges one step to each element of the smaller of the two sides, and any fixed element can belong to the smaller side of a split at most O(log n) times, so the total number of steps, and therefore the sum of diameters itself, is O(n log n). That is easily fast enough for the time limit, and it also explains why the answer fits comfortably in an int.
const long long m = 1LL << 50;
long long a, b;
long long next(long long x) {
return (((x ^ a) + b)) & (m - 1);
}
long long prev(long long x) {
return ((x + m - b) ^ a) & (m - 1);
}
int getSum(int n, long long x0, long long a, long long b) {
int res = 0;
long long cur = x0;
::a = a; // store the generator constants in the globals used by next()/prev()
::b = b;
for (int i = 0; i < n; i++) {
// solve for cur = X[i]:
// two pointers:
long long p1 = cur, p2 = cur;
int j1 = i, j2 = i;
while (true) {
p1 = prev(p1); //move one pointer back
p2 = next(p2); // move the other forward
j1--; //their positions
j2++;
if (j1 < 0 || j2 >= n || p1 >= cur || p2 >= cur) {
break;
}
// since those elements are still smaller than cur, the diameter increases
res++;
}
cur = next(cur);
}
return res;
}
Let X_{i,j} denote the indicator variable for the event that, at some point, we add vals[i]*vals[j] to our score. By linearity of expectation, our answer is the sum of probability(X_{i,j} is 1) * vals[i] * vals[j] over all 1 ≤ i < j ≤ n. Let Pr[i,j] denote the probability that X_{i,j} is 1. This depends on the number of coins left in the row, so let's instead compute Pr[i,j,k] = the probability that i,j contribute to the final answer given that there are k coins left in the row. We can compute this recursively: Pr[i,j,k] = (probability of choosing a coin before i) * Pr[i-1,j-1,k-1] + (probability of choosing a coin between i and j) * Pr[i,j-1,k-1] + (probability of choosing a coin after j) * Pr[i,j,k-1]. In addition, if i+2 == j, we add the probability that we choose coin i+1 as a base case.
This leads to an O(n^3) solution. See the code for more details.
public class CoinFlips {
public double getExpectation(int[] vals, int prob) {
int n = vals.length;
double p = prob / 1000000000.;
double[] d2 = new double[n + 1];
d2[0] = 1.;
for (int i = 1; i <= n; i++) {
d2[i] = d2[i - 1] * (1 - p);
if (d2[i] < 1e-20) d2[i] = 0;
}
double[][] pc = new double[n + 1][n + 1];
double[][] ps = new double[n + 1][n + 1];
for (int len = 1; len <= n; len++) {
for (int idx = 1; idx <= len; idx++) {
pc[len][idx] = d2[idx - 1] * p;
if (idx == 1) pc[len][idx] += d2[len];
ps[len][idx] = ps[len][idx - 1] + pc[len][idx];
}
}
double[][] dp = new double[n + 2][n + 2];
// [i][j] -> probability that pair i,j contributes to final answer.
for (int len = 3; len <= n; len++) {
double[][] next = new double[n + 2][n + 2];
for (int a1 = 1; a1 <= len; a1++) {
for (int a2 = a1 + 2; a2 <= len; a2++) {
next[a1][a2] = ps[len][a1 - 1] * dp[a1 - 1][a2 - 1] +
(ps[len][a2 - 1] - ps[len][a1]) * dp[a1][a2 - 1] +
(1 - ps[len][a2]) * dp[a1][a2];
if (a1 + 2 == a2) next[a1][a2] += pc[len][a1 + 1];
}
}
dp = next;
}
double ret = 0;
for (int i = 1; i <= n; i++) {
for (int j = i + 2; j <= n; j++) {
ret += dp[i][j] * vals[i - 1] * vals[j - 1];
}
}
return ret;
}
}